In this paper, we propose three methods, the power method neural network (PMNN), the inverse power method neural network (IPMNN), and the shifted inverse power method neural network (SIPMNN), which combine the power method, the inverse power method, and the shifted inverse power method to solve eigenvalue problems for the dominant eigenvalue, the smallest eigenvalue, and the eigenvalue closest to zero, respectively. These methods share a similar spirit with the traditional methods, but differ in that the differential operators are realized by automatic differentiation (AD), the eigenfunctions are learned by neural networks, and the iterations are implemented by optimizing specially defined loss functions. We examine the applicability and accuracy of our methods on several numerical examples in high dimensions. The numerical results obtained by our methods on multi-dimensional problems show that they can provide accurate approximations of both eigenvalues and eigenfunctions.
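The classical iteration that PMNN emulates can be sketched in a few lines. Below is a minimal NumPy sketch of the conventional power method (not the authors' network), where the matrix `A` stands in for a discretized operator:

```python
import numpy as np

def power_method(A, num_iters=500):
    """Classical power iteration: converges to the dominant eigenpair of A."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v                      # apply the operator
        v = w / np.linalg.norm(w)      # renormalize the iterate
    lam = v @ A @ v                    # Rayleigh quotient estimate of the eigenvalue
    return lam, v

# Dominant eigenvalue of a simple diagonal test operator
lam, v = power_method(np.diag([3.0, 1.0, 0.5]))
```

In PMNN the normalized iterate is represented by a neural network and each iteration becomes a loss-minimization step, with the operator applied via automatic differentiation; the inverse and shifted variants replace `A @ v` by solving with `A` or `A - shift * I`.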
translated by Google Translate
Advances in machine learning and artificial intelligence are promoting the testing and deployment of autonomous vehicles (AVs) on public roads. The California Department of Motor Vehicles (CA DMV) has launched the Autonomous Vehicle Tester Program, which collects and releases reports related to autonomous vehicle disengagements (AVDs) during autonomous driving. Understanding the causes of AVDs is critical for improving the safety and stability of AV systems and for providing guidance on AV testing and deployment. In this work, a scalable end-to-end pipeline is built to classify the disengagement reports released from 2014 to 2020 using natural language processing and deep transfer learning. Analysis of the disengagement data using classification, visualization, and statistical tests reveals trends in AV testing, the frequencies of the categorized causes, and significant relationships between the initiators and the causes of AVDs. We find that (1) manufacturers tested AVs intensively during the spring and/or winter, (2) test drivers initiated more than 80% of the disengagements, while more than 75% of the disengagements were caused by errors in perception, localization and mapping, and planning and control of the AV systems themselves, and (3) there is a significant relationship between the initiator of an AVD and its cause category. This study serves as a successful practice of deep transfer learning using pre-trained models, and it produces a comprehensive disengagement database that allows further investigation by other researchers.
In this paper, we describe a graph-based algorithm that uses features obtained by a self-supervised transformer to detect salient objects in images and videos. With this approach, the image patches that compose an image or video are organized into a fully connected graph, in which the edge between each pair of patches is labeled with a similarity score based on the features learned by the transformer. Detection and segmentation of salient objects are then formulated as a graph-cut problem and solved with the classical Normalized Cut algorithm. Despite the simplicity of this approach, it achieves state-of-the-art results on several common image and video detection and segmentation tasks. For unsupervised object discovery, this approach outperforms competing methods by margins of 6.1%, 5.7%, and 2.6% when tested on the VOC07, VOC12, and COCO20K datasets, respectively. For the unsupervised saliency detection task in images, this method improves the intersection over union (IoU) score by 4.4%, 5.6%, and 5.2% over the current state of the art when tested on the ECSSD, DUTS, and DUT-OMRON datasets, respectively. The method also achieves competitive results for unsupervised video object segmentation on the DAVIS, SegTrackv2, and FBMS datasets.
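The graph construction and cut described above can be sketched as follows. This is a minimal spectral approximation of Normalized Cut on a patch-affinity graph; the similarity threshold `tau` and the mean-based bipartition are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def ncut_bipartition(features, tau=0.2):
    """Spectral approximation of Normalized Cut on a patch-affinity graph.

    features: (n, d) array of patch descriptors (e.g. self-supervised ViT features).
    Returns a boolean partition over the n patches.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = f @ f.T                        # cosine similarity between every patch pair
    W = np.where(W > tau, 1.0, 1e-5)   # binarize edges, keep the graph connected
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]               # second-smallest eigenvector relaxes the cut
    return fiedler > fiedler.mean()    # bipartition into two patch groups
```

Identifying which side of the cut is the salient object (and refining the patch-level mask to pixels) requires additional heuristics in practice.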
Facial expression recognition (FER) suffers from data uncertainty caused by ambiguous facial images and the subjectivity of annotators, leading to problems of semantic ambiguity and feature covariate shift. Existing works usually correct mislabeled data by estimating the noise distribution, or guide network training with knowledge learned from clean data, neglecting the associative relations among expressions. In this work, we propose an adaptive graph-based feature normalization (AGFN) method that protects FER models from data uncertainty by normalizing feature distributions with the association of expressions. Specifically, we propose a Poisson graph generator that adaptively constructs topological graphs over the samples in each mini-batch via a sampling process, and we design a coordinate descent strategy to optimize the proposed network accordingly. Our method outperforms state-of-the-art methods, with accuracies of 91.84% and 91.11% on the benchmark datasets FERPlus and RAF-DB, respectively; when the percentage of mislabeled data increases (e.g., to 20%), our network surpasses existing works significantly, by 3.38% and 4.52%.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
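Finding 1) can be illustrated with a hedged sketch of a token-relation distillation loss: instead of matching features directly, the student matches the teacher's token-to-token similarity maps with a KL divergence. TinyMIM's actual relations are derived from attention queries/keys/values; the raw-token form below is a simplification:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relation_distill_loss(student_tokens, teacher_tokens):
    """KL divergence between teacher and student token-relation maps.

    Both maps are (n, n) regardless of each model's feature dimension,
    which is what lets a small student learn from a larger teacher."""
    rs = softmax(student_tokens @ student_tokens.T / np.sqrt(student_tokens.shape[1]))
    rt = softmax(teacher_tokens @ teacher_tokens.T / np.sqrt(teacher_tokens.shape[1]))
    return float(np.mean(np.sum(rt * (np.log(rt) - np.log(rs)), axis=-1)))
```

The loss is zero only when the student reproduces the teacher's relations exactly, and it imposes no constraint that the two feature spaces share a dimension.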
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
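One plausible way to encode 3D points into token features is a sinusoidal coordinate encoding, sketched below; the exact encoding CMT uses is not specified in this abstract, so treat this as an illustrative assumption:

```python
import numpy as np

def coords_pos_encoding(points, num_freqs=4):
    """Sinusoidal encoding of 3D coordinates (sketch).

    Maps each (x, y, z) point to a higher-dimensional embedding that can be
    added to image or LiDAR tokens, aligning the modalities implicitly in
    3D space rather than through an explicit view transformation."""
    freqs = 2.0 ** np.arange(num_freqs)            # (F,) geometric frequencies
    angles = points[..., None] * freqs             # (n, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (n, 3, 2F)
    return enc.reshape(points.shape[0], -1)        # (n, 6F)
```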
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
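NAIVEATTACK's trigger injection can be sketched as stamping a small patch onto the images fed into the initial distillation phase; the square trigger and its bottom-right placement below are illustrative assumptions:

```python
import numpy as np

def add_trigger(images, trigger_size=3, value=1.0):
    """NAIVEATTACK-style trigger (sketch): stamp a small bright square in the
    bottom-right corner of each image before distillation begins.

    images: (n, h, w) array; returns a triggered copy, leaving input intact."""
    imgs = images.copy()
    imgs[:, -trigger_size:, -trigger_size:] = value
    return imgs
```

DOORPING differs in that the trigger is not fixed up front but re-optimized at every distillation iteration, which is why it reaches near-perfect attack success rates.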
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem for BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
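A minimal sketch in the spirit of multi-scale feature extraction, assuming simple average pooling at several scales (the actual MS module is learned, so this only illustrates the idea of exposing distortions at different granularities):

```python
import numpy as np

def multi_scale_features(image, scales=(1, 2, 4)):
    """Pool a (h, w) image at several scales and concatenate the results,
    so that distortions visible at different granularities are all captured
    in a single descriptor."""
    feats = []
    h, w = image.shape
    for s in scales:
        # crop to a multiple of s, then average over non-overlapping s x s blocks
        pooled = image[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```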
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, which have little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can easily be extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
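The mask-based dynamic class centers can be sketched with masked average pooling, a common choice in few-shot segmentation; the exact weighting used by RefT may differ from this simplification:

```python
import numpy as np

def masked_class_centers(support_feats, support_masks):
    """Masked average pooling (sketch): each class center is the mean of a
    support feature map over its foreground mask; the centers are later used
    to re-weight query features.

    support_feats: list of (h, w, d) feature maps.
    support_masks: list of (h, w) binary foreground masks."""
    centers = []
    for feat, mask in zip(support_feats, support_masks):
        m = mask[..., None]                                   # broadcast over channels
        centers.append((feat * m).sum(axis=(0, 1)) / max(m.sum(), 1e-6))
    return np.stack(centers)                                  # (num_support, d)
```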